Scholarly Materials: Paper or Digital?

Author

  • Richard E. Quandt
Abstract

The paper starts out by reviewing the so-called “library crisis” and the extensive literature on the determinants of journal prices. It discusses the impact of the recent merger activity among journal publishers and notes, among the possible remedies that have been suggested, the possibility that electronic publications may slow down the increase in journal prices. It next discusses the “productivity puzzle,” i.e., the question of why the substantial improvements in computer technology may not have been translated into productivity increases at a faster rate. While the longer-term impact on productivity is not as unfavorable as initial approaches may have suggested, the paper argues that the cost savings in producing electronic rather than paper journals tend to be overestimated, particularly because the costs of archiving are not adequately dealt with in many approaches to this problem. While much attention has been devoted to how electronic approaches can affect the costs of producing journals, relatively few people have dealt with the even more important question of how these approaches affect productivity in teaching, learning, and research. The final substantive section of the paper deals with pricing and related issues, with particular attention to price discrimination and the bundling of journals.

Richard E. Quandt, 162 Springdale Road, Princeton, NJ 08540
Library Trends, Winter 2003

Introduction

In 1994, The Andrew W. Mellon Foundation launched a project to study the impact on scholarly communication of electronic or digital approaches to the provision of scholarly library materials.1 The Foundation announced its willingness to fund projects that would “. . . assist the adoption of new technologies for acquiring, storing, and disseminating scholarly information.
The greatest emphasis is placed on concrete, practical, and cost-saving projects, while leaving a little room for exploring more visionary projects with less well-defined payoffs in the short-run. In any event, it was intended from the beginning that projects would largely use existing hardware and software technologies, rather than concentrate on inventing new types of technologies (such as designing new types of chips). In all projects funded by the Foundation, grantees must pay considerable attention to the economics of the project, that is to say to the cost side as well as to the demand side. This requires that project personnel carefully track the evolution of costs and of old and new ways of providing and accessing scholarly information” (Quandt, 1996a). The intention was to support “a variety of natural experiments in different fields of study using diverse formats including the electronic equivalents of books, journals, manuscripts, sound recordings, photographs and working papers” (Bowen, 1999). Preparation for the initiative began in 1993, and the basic objectives of the Foundation were stated in a paper that attempted to analyze the principal difficulties facing scholarly communication and the promises of new technologies (Ekman & Quandt, 1993, rev. 1995). Between October 1994 and March 1999, the Foundation awarded, under the auspices of this program, $18,977,000 to a total of fifty-four projects. This initiative was by no means the first foray by the Foundation into an analysis of libraries, library technologies, and the economics of libraries. Two years earlier, in 1992, the Foundation sponsored the preparation and publication of a definitive analysis of the economic problems besetting research libraries (Cummings, Witte, Bowen, Lazarus, & Ekman, 1992). The principal problem appeared to be that the prices of library materials were increasing faster than library budgets, and that journal prices were increasing faster than monograph prices.
Thus, for example, between 1982 and 1990, journal prices increased 131.9 percent in chemistry and physics, 125.6 percent in engineering, 91.9 percent in political science, and 58.0 percent in languages and literatures. While other authors reported slightly different figures, all agreed that the increase was most marked in science, medicine, and technology (Lynden, 1993; Ketcham & Born, 1994). Significant evidence was emerging that libraries were reducing their purchases of monographs and significantly reducing their purchases of serials, which appeared to threaten their ability to fulfill their traditional role of mediating scholarly communication. But these untoward economic developments coincided temporally with the enormously rapid development of various electronic technologies: the speed of processors, the capacity of storage devices, and the bandwidth (transmission capability) of networks. Thus, Moore’s Law, enunciated in 1965, which predicted that computing power and storage capabilities would double every eighteen months, proved reasonably accurate over the following thirty years (Fuchs, 2001); communication costs per one million bits declined between 1960 and 1992 from $1 to $0.00094, and the cost of routers (per million bits transmitted) from $10 to $0.00007 (MacKie-Mason & Varian, 1993). It seemed natural to wonder whether it might not be possible to deliver the materials that form the content of scholarly communication to users in a cost-effective manner. It seems appropriate near the tenth anniversary of Cummings et al. to revisit some of the fundamental issues of the library crisis and examine the relevance of some recent developments.

Views on the Library Crisis

While in a real sense there is only one “library crisis” (and not a separate book crisis and journal crisis), the major source of the problem is generally perceived to be the behavior of journal prices.
To illustrate the level of journal prices, the annual subscription price of chemistry and physics journals in 1990 was reported by Cummings et al. to be $412.66; engineering journals, $138.84; political science journals, $49.67; and language and literature journals, $30.63.2 To the extent that libraries are perceived to be in crisis, two kinds of questions can be raised: (1) Why are the subscription prices of some journals higher, and often very much higher, than those of other journals?3 (2) Why are the subscription rates of journals increasing faster than the rate of inflation and library budgets? The first question has been attacked by various authors by means of straightforward econometric studies in which the prices of various journals are regressed in cross-sectional models on a variety of explanatory variables (Peterson, 1989, 1990, 1992; Chressanthis & Chressanthis, 1994a, 1994b). Typical explanatory variables are the number of issues per year, a dummy variable indicating the presence or absence of photographs or graphs in the journal, a dummy variable indicating the presence of advertising in the journal, the number of pages published per year, the number of years that the journal has existed, a dummy variable indicating whether the publisher is for-profit or not, and dummy variables indicating the geographic location of the journal. Other variables used include measures of the quality of the journal; these measures may be based on the number of citations to the journal, or the “half-life” of articles in the journal (measured by the most recent period accounting for half the total citations), or an immediacy factor (the number of citations to a journal divided by the number of articles in it), or finally, on an impact factor defined as the average number of times that articles appearing in the journal in a certain preceding period are cited in a given year.
Some regressions also include the individual subscription price (since the library price minus the individual price, divided by the library price, may be a measure of monopoly power, on the assumption that the individual price is close to marginal cost). The results from these early regressions are not entirely consistent, but certain broad patterns do emerge. The subscription price of a journal increases with the number of issues and the number of pages it publishes per year; an additional copy in circulation and an additional year of journal existence reduce the price; being published by a commercial publisher or in Europe substantially increases the price. Versions of these models that contain variables measuring the “quality” of the journal indicate that the higher the quality, the higher the subscription price. None of these findings is particularly surprising, and the higher price charged by commercial and European publishers tends to confirm the often articulated observation that commercial publishers are able to reap monopoly profits, particularly in the light of the well-known dominant position of publishers such as Elsevier, Kluwer, Springer Verlag, and others. The latest study of this kind is by Richard Meyer (2000), on behalf of the Associated Colleges of the South, which had received a Foundation grant for exploring the possibilities of database sharing among its member institutions. The principal new contributions of the study are the use of a much larger database than was employed by earlier studies (859 periodical titles) and an explicit test of the hypothesis that monopoly power does not increase in the electronic journal domain. This hypothesis was motivated by the straightforward observation that entry costs are lower for electronic journals than for hard-copy journals.
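The monopoly-power proxy and the general shape of these cross-sectional price regressions reduce to simple arithmetic. A minimal pure-Python sketch, using a one-regressor version of the multi-variable models described above; all of the journal figures below are invented for illustration, not drawn from any of the studies cited:

```python
def monopoly_proxy(institutional_price, individual_price):
    """(P_inst - P_indiv) / P_inst: with the individual price taken as a
    stand-in for marginal cost, values closer to 1 suggest more market power."""
    return (institutional_price - individual_price) / institutional_price

def ols_slope(x, y):
    """Slope of a one-regressor least-squares fit, the bivariate analogue
    of the cross-sectional price regressions described in the text."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical journals: annual pages published vs. institutional price.
pages = [400, 800, 1200, 2000, 3000]
prices = [60, 140, 210, 380, 560]

slope = ols_slope(pages, prices)        # dollars of price per additional page
power = monopoly_proxy(412.66, 99.00)   # the individual price here is invented
```

The positive slope mirrors the finding that prices rise with pages published; in the real studies the same fit is run with many regressors at once.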
The dependent variable in the regression study was either the institutional price or a measure of monopoly power, as measured by the difference between the institutional price and the individual price.4 Most of the regression coefficients (on a fairly standard set of variables for regressions of this type) have the expected signs and many are statistically significant. The major surprise was that the dummy variable measuring whether a journal is electronically available had a positive sign and was highly significant in both types of regressions; i.e., the regressions of the institutional price, or of the institutional price minus the individual price, on the explanatory variables including the electronic availability dummy suggested that electronic availability increases monopoly power. This result is almost certainly due to a specification error and represents incomplete modeling of the interactions between electronic availability, the commercial or not-for-profit status of the publisher, and whether the journal in question was electronic-only or had a hard-copy variant. The study also finds a weak negative relationship between price and circulation, which it interprets to mean that price increases result in cancellations, hence lower circulation. While this is a correct interpretation of a dynamic process (Quandt, 1996b), in a cross-sectional analysis a more proper interpretation is that journals with large circulation are able to set lower prices because they can spread first-copy costs over a large number of units. The study also correctly recognizes that circulation may be jointly determined with price, and that hence simultaneous-equations estimation techniques would need to be employed, but does not actually implement this train of thought.
Finally, the study examines which journals charge prices that are, in percentage terms, significantly higher than the amount predicted by the regression equation and finds, not surprisingly, that Elsevier and Academic Press account for twelve of the top twenty such journals. But in order to estimate the magnitude of the “Elsevier-Academic Press effect,” it might have been better to include dummy variables for these publishers. All in all, the results of the cross-sectional studies are reasonable and provide a great deal of insight into the static factors that determine journal pricing. The second question, why journal prices are rising faster than other price indicators, concerns a dynamic process. There are hints in the static, cross-sectional studies that monopoly power has some role to play in this, because commercial publishers’ journals are typically much more expensive than those published by university presses, professional associations, and other not-for-profit organizations. Presumably, commercial publishers face an inelastic demand for their product and hence are continually attempting to raise prices in order to secure monopoly profits. But this is not the only possible explanation for the tendency of subscription prices to rise. Journals have economic value because they provide information and because they play an important role in assessing the quality of a scholar (Noll, 1996). Noll and Steinmueller (1992) ask, in the light of the undoubted negative association between circulation and subscription prices, why some journals have low and others high circulation. They find the basic reasons in the behavior of scholars themselves. Since advancement in salary and academic rank heavily depends, at least at U.S. universities, on the scholar’s publication record, scholars attempt to publish articles, but are often unable to do so in the most prestigious general journals,5 because the demand for space far outstrips its availability.
Since it is very difficult to create new, top-quality, general journals, publishers accommodate academics who wish to publish by creating more specialized journals. Creating a high-quality specialized journal is not as difficult as creating a more general-purpose journal, but more specialized journals are doomed to have much lower circulation; hence first-copy costs have to be spread over a smaller number of copies, resulting in a higher subscription price. But then the general pattern may repeat itself after some time, and the more specialized journal cannot accommodate the demand for article submissions; hence even more specialized journals with even smaller circulation and higher prices are spawned. There is much that is appealing about this explanation, but it is doubtful that it can ultimately explain the endless round of price increases that has been observed in the marketplace. Annual price increases, particularly for journals published by commercial publishers, have been striking at times, as is evident from the 1992–93 increases for Elsevier-Pergamon journals deemed to be of importance for the Scripps Institution of Oceanography Library. It is noted that price increases are justified by publishers on the grounds that the number of pages has increased and that other changes have been introduced in the production of journals to justify the increases (as one would expect from the results of the regression studies cited earlier).6 The percentage increases in prices of titles which were expanded in scope and of those that were not expanded between 1992 and 1993 are shown in Table 1. While it is true that the journals that expanded in size or scope increased in price by a higher percentage than the others, the difference is not consequential, particularly in the light of the fact that journal publishing costs had not increased in an unusual fashion in recent times (McCabe, 1998).
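The first-copy-cost mechanism just described is easy to make concrete. A sketch with invented numbers; neither the costs nor the circulations below are taken from the literature:

```python
def break_even_price(first_copy_cost, marginal_cost, circulation):
    """Subscription price that just covers costs: the fixed first-copy cost
    is spread over the subscriber base, plus the per-copy marginal cost."""
    return first_copy_cost / circulation + marginal_cost

# Same hypothetical first-copy cost, very different circulations.
general = break_even_price(200_000, 20, 5_000)    # broad journal: $60
specialized = break_even_price(200_000, 20, 500)  # specialized journal: $420
```

With the fixed cost held constant, a tenfold drop in circulation multiplies the break-even subscription price sevenfold in this example, which is the essence of the specialization spiral.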
Moreover, attributing the price increase in individual cases to, say, an increase in the number of pages published can yield absurd answers; to wit, if the price increase in Biochimica et Biophysica Acta between 1992 and 1993 were attributed entirely to an increase in the number of pages published, then, using the regression coefficients of one of the Chressanthis and Chressanthis studies, the implied increase in the number of annual pages is 76,638.

Table 1. Elsevier-Pergamon Price Increases, 1992–93.

                               1992 Prices   1993 Prices   Percent Increase
Elsevier
  Expanded Journals (18)       $21,615       $28,995       34.1%
  Not Expanded (23)            $18,006       $23,490       30.5%
Pergamon
  Expanded Journals (10)       $7,294        $9,805        34.4%
  Not Expanded (21)            $24,898       $32,036       27.9%

It has also been the case that substantial merger activity has taken place among journal publishers. Elsevier merged with Pergamon, then with Reed, and most recently, Reed-Elsevier merged with Harcourt (although the Higher Education and certain Corporate and Professional business activities of Harcourt are to be spun off to The Thomson Corporation), thus acquiring another major journal publisher, Academic Press.7 Wolters merged with Kluwer, Lippincott merged with Kluwer, and while the Reed/Elsevier–Wolters/Kluwer merger was called off in March 1998 (McCabe, 1998), the trend toward increasingly higher concentration in the journal publishing industry seems indisputable, which tends to contradict Noll’s view that market power has not increased among journal publishers (Noll, 1996, p. 13). The fundamental question is whether merger activity is likely to raise journal prices—an action that will be undertaken by a merged firm if doing so raises profits.
McCabe’s theoretical model (2000) is based on the assumptions that (1) all libraries have one of two budget levels (high or low), and (2) libraries purchase periodicals in declining order of U(i)/C(i), where U(i) is the “quality” or usefulness of journal i and C(i) is its cost, until their budgets are exhausted.8 McCabe’s model permits both outcomes, depending on particular circumstances, but his empirical analysis of publisher and price data from some 3,000 journals in the 1988–98 period, which evaluates the effect of the Reed/Elsevier merger with Pergamon and the Wolters/Kluwer merger with Lippincott, indicates that in the first of these mergers there was a pure, merger-induced market-power effect, raising Elsevier journal prices by 5.2 percent and Pergamon prices by 27 percent. In the second of these mergers, Lippincott prices increased by some 30 percent (although some portion of this increase was due to an increase in the inelasticity of demand for those journals and thus did not represent pure market power), while Kluwer prices declined slightly. It seems reasonable to conclude that market power has a noticeable effect on journal prices over time. There do not appear to be many strategies in the short run that could alleviate these problems. Interlibrary loans can certainly help libraries that cannot afford certain journals, but they are time-consuming (they do not deliver the product “just in time”) as well as costly. While the annual growth of interlibrary lending services has been impressive (9–10 percent between 1988 and 1995 (Kyrillidou, 1995)), borrowing an item was reported to cost between $9.84 and $30.27 and lending between $6.29 and $17.49 (Quandt, 1996a); the median delivery time was found to be 12.5 days (Miller & Tegler, 1988).
More recent figures put the average cost of borrowing an item at $18.35 and of lending at $9.48, with borrowing turnaround time averaging 16 days (although both costs and turnaround times are somewhat, but not massively, smaller for the ten research libraries with the best performance) (Jackson, 1997). Both the high cost and the turnaround time suggest that ILL is at best an imperfect remedy. Alternatively, a vigorous antitrust policy could perhaps slow the rate of merger-induced price increases, but since journals are often not perceived to be close substitutes, the journal market is not one in which far-reaching antitrust action is likely; in any event, this is unpromising as a short-run strategy. It was therefore entirely sensible and natural that people should look to the new electronic technologies for possible solutions to the library crisis. Analogously with the effects of automation in industrial contexts, the possibility of electronic delivery of scholarly materials appeared to promise breakthroughs in costs, and suggested that both the speed of delivery and the reach of such library materials could be greatly enhanced. In other words, the new technologies appeared to promise major gains in productivity.

The Productivity Puzzle

Aggregate Productivity. Although fully transistor-based mainframe computers began to be available in the late 1950s, it is not unreasonable to claim that the modern computer revolution started in the 1970s. LSI and VLSI circuits started to come into existence in the early 1970s (although integrated circuits were available throughout most of the 1960s). Nineteen seventy-one was the year in which the revolutionary Intel 4004 chip made its appearance (Computer History and Development). Unix was invented in 1969, although the first publication about Unix appeared only in 1974 (Ritchie, 1984).
The first fairly widely available personal computer (the Altair 8800) appeared on the market in 1975, Apple Computer was founded in 1976, Wordstar appeared on the market in 1979, and finally, in 1981, the IBM PC made its appearance.9 From 1981 to 1992, the number of PCs in use grew from 2 million to 65 million.10 But in looking over the decades of the 1970s and 1980s, it seems that neither the growth of GDP nor the growth of productivity reflected the massive advances in computing and their growing applications in the business world. In fact, the growth of total factor productivity dropped from an average of 1.45 percent per annum in the 1929 to 1966 period to 0.04 percent in the 1966 to 1989 period (David, 2000). The productivity paradox consists of the slowing of productivity growth in this period “in the face of phenomenal technological improvements, price declines, and real growth in computers and related IT equipment” (Moulton, 2000). Why did measured output (GDP) not grow faster and why did productivity growth perform as badly as it did? Is the computer revolution a flash in the pan and are we unrealistic to pin our hopes on it for contributing to the solution of the problems of scholarly communication? There are several reasons for believing that the apparent lack of response of the economy to the computer and information revolution should not cause us to be surprised.

1. Investment in computers is still small as a fraction of total investment in the economy. In 1996, it accounted for less than 10 percent of gross investment and, according to Daniel Sichel, investment in computer hardware contributed only 0.2 percent of the total average annual growth rate of 2.3 percent of nonfarm business output from 1980 to 1992 (Blinder & Quandt, 1997; Sichel, 1997; David, 2000). According to Moulton, the contribution was in the 0.1–0.2 percent range between 1987 and 1994 and perhaps between 0.3 and 0.4 percent thereafter (Moulton, 2000, pp. 36–37). It is difficult to imagine that a sector that is so small relative to the total could induce revolutionary changes in a short period of time.

2. Aggregate output changes (and hence, productivity measures) are obtained by deflating nominal (i.e., current-dollar denominated) output by some appropriate price index. But there is plenty of reason to believe that price indices over the relevant period have not been measuring price changes correctly, and have, in fact, overstated price increases by an average of 1.1 percent.11 Making the relevant adjustment would, according to David, increase the total factor productivity growth rate for 1966–89 to the 0.64–1.14 percent range.

3. Productivity appears to have increased least in the areas in which one can be quite certain that dramatic changes in production processes have taken place as a result of the computer revolution, namely banking, finance, insurance, and related areas. But these are the areas in which measuring quality change may be most difficult: how do we impute output to the convenience created by the existence of ATMs and by the ability of stock exchanges to process vastly greater numbers of trades?

4. New products appear at an alarming rate in computer and information technology. That brings with it two problems, one real and one a matter of measurement. The real problem is that the avalanche of new products creates rapid obsolescence; as a result, the gross investment in computer equipment is far larger than net investment, and much labor activity has to be expended on learning new computer and software systems (Blinder & Quandt, 1997, p. 29). The measurement problem is that new products are not immediately “chained into” price indices, but only after they have achieved a minimal market share; however, the greatest price declines for new products tend to occur soon after their introduction, and hence they may not show up in price indices in a timely fashion (David, 2000, pp. 61–62).

5. Standardization and quality control, particularly in software, have been difficult to achieve since the provision of software has shifted from a few major manufacturers to hundreds of thousands of smaller providers—a development strongly associated with the rise of the personal computer. This clearly puts additional burdens on the users.

6. The implicit expectation that the introduction of computer technology would result in a nearly simultaneous increase in productivity and output growth is unrealistic and ahistorical. As David points out (2000, pp. 77–82), central electrical generating stations were introduced in 1881, but between 1899 and 1904, the electric portion of mechanical drives in manufacturing rose from 5 to only 11 percent, and the proportion of secondary electric motors in manufacturing reached the 50 percent mark only as late as the 1920s. Brynjolfsson and Hitt (2000) also find that “the effects of information technology are substantially larger when measured over longer periods” (and also that the effects are more easily visible in firm-level data than in aggregate data, because the latter tend to obscure or mask the quality improvements resulting from information technology).

The fact is that the diffusion of innovations takes time, and it is plausible to argue that we are still in the beginning phases of the computer revolution. The overall longer-term productivity promise of the computer revolution is therefore not nearly as unfavorable as initial views of the productivity puzzle might suggest.
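The deflator adjustment in point 2 is straightforward arithmetic: if the price index overstates inflation, measured real growth (and hence measured total factor productivity growth) is understated by roughly the same number of percentage points. A sketch using the figures cited above:

```python
def adjusted_tfp_growth(measured_tfp, deflator_bias):
    """Add back the overstatement of inflation: output deflated by a price
    index that rises too fast appears to grow too slowly by the same margin.
    Both arguments are in percentage points."""
    return measured_tfp + deflator_bias

# Measured total factor productivity growth for 1966-89 was 0.04 percent;
# a deflator bias of 1.1 points yields the upper end of David's cited
# 0.64-1.14 percent range (smaller bias estimates give the lower end).
upper = adjusted_tfp_growth(0.04, 1.1)   # 1.14 percent
```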
But we have a more stringent task before us: we need to come to grips with the productivity implications of innovations in the provision of scholarly materials. These materials are, first and foremost, books and journals, but they also include visual and audio materials and multimedia materials that combine visual and audio features, such as films. The urgency and possibly the difficulty of introducing innovations in these areas may well depend on the “stability” of the original materials, some of which may have a relatively high degree of stability and long-term existence, such as microfilm, microfiche, and paper, or may be relatively endangered, as is material printed on acid paper, or may be rare and endangered, such as materials many centuries old, or may be evanescent, such as one-time performances. The question of productivity enhancements resulting from the electronic provision of scholarly materials is ambiguous without substantial further clarification. Do we mean that the object (i.e., book, journal, journal article, live performance, or whatnot) can be created with less labor or less total-factor involvement? Or do we mean that the content of the scholarly material can be delivered to the end user in a more effective manner (faster, more convenient, less subject to wear-and-tear, etc.) and preserved for posterity in a more efficient manner? Or do we mean, and I think that this is both the most interesting and most difficult question to answer, that the activities that the end users of scholarly materials engage in, i.e., teaching, learning, and research, become more productive? There does not appear to be unanimous agreement concerning any of these questions.

Costs and Productivity in Producing Scholarly Information.
Most of the attention of researchers on the economics of electronic libraries has been focused on the first two questions, and it is remarkable to what extent anecdotal evidence and visionary thinking have characterized the debate. One of the earliest debates was in response to a proposal by Stevan Harnad (1994), in which he proposed that authors of “esoterica” (i.e., the standard scholarly journal literature that could not conceivably earn royalties for the author) should simply post their papers on the World Wide Web by FTP, as has happened with Paul Ginsparg’s famous high-energy physics paper network, and “the long-heralded transition from paper publication to purely electronic publication . . . would follow suit almost immediately.”12 He estimated that the electronic “page costs” would amount to only about 25 percent of paper page costs, contrary to the usual estimate of 75 percent. In his model, the electronic material would be available for free to readers and costs would be recovered from authors at the rate of about $400 per twenty-page article. Numerous persons participated in the debate that ensued, including Andrew Odlyzko, who identified as some of the principal costs of journal publication (a) typing or typesetting the manuscript, (b) peer review, and (c) copyediting, printing, distribution, etc. He then goes on to say that the only part of (b) that will continue to cost money is secretarial assistance, estimated to cost $100–$200 per paper, because editors typically work without compensation. This reflects the widespread fallacy that resources used without a corresponding payment (e.g., resources that are stolen) are in fact free and do not represent a social cost; as Jack P. Hailman said, “Things might be changing on the subject of paid editorships, at least my own views have changed. I served as editor of Animal Behaviour for three years (or was it five?)
and never again would I devote that much of my life uncompensated.”13 A similar “free” resource is cited by Odlyzko: “Scholars can run electronic journals themselves, with no financial subsidies or subscription fees, using only the spare capacity of the computers and networks that are provided to them as part of their job” (Odlyzko, 1995). (While assessing a zero cost for such use may be correct marginal-cost pricing, every machine cycle on a computer has, in effect, zero marginal cost until the last one, and marginal-cost pricing will not pay for the computer.) Odlyzko provides the most extensive statement of a new vision for scholarly journals. In the paper, based mostly on information gleaned from paper-based mathematics journals, he attempts to estimate the cost of producing and distributing journals. Editorial costs per article are estimated at $4,000 and all other costs (typesetting, distribution, etc.) at another $4,000. He suggests that both of these can be cut dramatically by turning to electronic production: the costs of production and distribution for obvious reasons, and the editorial costs by reengineering the entire process and perhaps becoming satisfied with a less-perfect appearance for journals or individual articles.14 Dispensing with the noneditorial costs of papers, we still have the print environment’s $4,000 per paper to cope with, and his calculations yield a per-paper cost of $75 for the Ginsparg high-energy physics server model. But there is some serious doubt whether the Ginsparg model can be easily transferred to other fields (Borgman, 2000, p. 89), and there is clearly a great deal of variation in these figures from journal to journal. The figures he provides for Physical Review B attribute 27 percent of the cost to editorial work and 66 percent to composition, printing, distribution, etc.; the corresponding figures for the American Economic Review are 36 percent and 38 percent, respectively (Getz, 1999).
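The gap between Odlyzko's print and electronic figures is worth spelling out. Simple arithmetic on the per-paper costs cited above (the function name is mine, and this is only the headline comparison, ignoring the quality and transferability caveats just noted):

```python
def per_paper_saving(print_cost, electronic_cost):
    """Fraction of the per-paper cost saved by the electronic model."""
    return 1 - electronic_cost / print_cost

# Odlyzko's print estimate: $4,000 editorial plus $4,000 other costs per
# paper, against roughly $75 per paper on the Ginsparg server model.
saving = per_paper_saving(4_000 + 4_000, 75)   # about 0.99, i.e. a 99% saving
```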
Moreover, there are several journal models, and some of these do not lend themselves easily to a pared-down, electronic-only version, as in the case of journals published by professional societies whose members receive membership benefits for their annual dues that go beyond the journal itself. In contemplating a wholesale change away from print journals, it is particularly important to retain processes that ensure scientific quality control (Rowland, 1997). It is clear that ultimately the scholarly public will have to decide what services and qualities it desires from an electronic journal (“$250/paper gets you 90% of the quality that $1000/paper gets you.”) (Odlyzko, 1999). Finally, there are even those who argue that the potential of the electronic medium for more elaborate publications is so high that first-copy costs will actually rise if it were to supplant paper on a large enough scale (Noll, 1996). Are there any firm conclusions we can reach concerning the costs of producing scholarly materials, and particularly journals, and what issues remain in an unsettled state? First, there is general agreement that there is a substantial fixed cost (“first-copy cost”) of producing a journal and a much lower marginal cost, which is small but nonzero for paper-based journals and is in effect zero or negligible for electronic products. Second, there is no doubt that electronic approaches to producing journals have the potential for substantial savings, perhaps ranging from 25 to 75 percent of the corresponding costs of paper-based journals. However, it is by no means clear that all the costs involved in a careful editorial process should be dispensed with. As Halliday and Oppenheim (2000) point out, readers expect certain qualities, and the “quick-and-dirty” method may just shift the burden onto librarians.
They, in fact, undertake an interesting simulation of the costs of three models: (1) the traditional model, in which editors and referees are unpaid and production and delivery costs are recovered through subscriptions; (2) the “Harnad model”; and (3) a free-market model, in which authors pay charges but receive royalties and editors and referees work for free or minimal honoraria. Under a variety of assumptions concerning subscribers (where relevant), paper rejection rates, and overhead rates, they compute the range of subscription rates and page charges (wherever these are relevant). The traditional model with 500 subscribers produces annual subscription rates ranging from $308 to $510. The Harnad model produces much higher per-page charges to authors than Harnad himself proposed (except in the case of a 90 percent rejection rate); Halliday and Oppenheim note that the New Journal of Physics, a new electronic journal following this model, charges $500/paper but has published only twenty-seven papers in eighteen months. The market model produces subscription rates not too dissimilar to those of the traditional model, but very high per-page charges. These are interesting insights, but no rigorous studies seem to exist as yet of the cost structure of paper versus electronic journals, and most of the “data” adduced by partisans on one side or the other are based on personal experience in a limited number of fields or with a limited number of publications. But just as the cost structure of libraries can be and has been studied by statistical analyses of library outputs (number of reference services provided, volume of circulation, number of interlibrary loans, etc.) and library inputs (number of professional and support staff, acquisitions, stock of books and journals, etc.)
(Charnes, Cooper, Lewin, & Seiford, 1994; Hayes, 2000), so the costs of running journals could be studied by analyzing the relationship between the types and quantity of costs and the services provided. It would be extremely instructive to carry out such a study. Third, those who believe that the electronic medium will soon supplant paper rarely pay detailed attention to the problem of archiving—including the problem of obsolescence of hardware—and generally content themselves with expressing the belief that powerful software will ultimately solve these problems. But there are several ways in which one could attempt to cope with some of these problems, such as archiving, and lack of attention to the details is bound to cause problems.15 Borgman notes that it is expensive to preserve materials in electronic formats and that journal editors and publishers have typically not been willing to assume the responsibility for doing so (Borgman, 2000, p. 91). Fourth, the enthusiasts for electronic-only journals have predicted the imminent demise of paper journals; to wit: “Traditional scholarly journals will likely disappear within 10–20 years” (Odlyzko, 1995). This prediction was made some six years ago, and while electronic journals have multiplied, paper journals have not really started to disappear; hence it is doubtful whether the time scale envisaged is right. As Rowland (1997) put it, “It is true in theory that all the top researchers in a field could stop submitting their articles to commercial journals and refuse to referee for them, and transfer their energies to new electronic journals, thus raising their prestige. In practice it is unlikely that this will happen by voluntary action.” In fact, since paper journals tend to dominate in prestige, no individual scholar has much of an incentive to transfer his or her loyalty to electronic counterparts, which is the classic problem of public goods.
Fifth, one of the basic thrusts of the argument that journals should be published by the scholars themselves at low cost is to cut out the “middleman,” i.e., the for-profit publisher who skims off the fat of the land. But it is a fatal flaw in the argument that it rests on the belief that for-profit publishers will blithely stand by and see their livelihood eroded. In fact, anyone seeing the technological potential of electronic publishing would have had to predict that for-profit publishers will also get in on the act, provide their paper journals in electronic form as well, and offer them for sale in a variety of bundled and unbundled forms.16 This, in fact, has happened, and a number of publishers produce electronic versions of their journals; Elsevier alone provides over 1,200 current electronic journals, with an expanding backfile, accounting for a total of 1,463,900 articles.17 This has to be a serious obstacle to creating new, low-cost electronic journals, which must overcome the established prestige of existing journals to become viable and cannot even differentiate themselves by being electronic. Of course, journals and books are not the only kinds of scholarly material capable of electronic delivery. Standard databases are probably among the oldest forms of materials that could be delivered electronically, perhaps initially on diskettes or CD-ROMs and increasingly over the Internet. In the past decade, other types of materials, e.g., rare and historical works, maps, art images, and manuscripts, have been digitized and made broadly accessible. While the costs of digitizing and delivering such material can be highly variable, depending among other things on the resolution required, it is worth noting that the costs of creating such databases may be offset by the costs that are avoided by the scholar who has access to them.
Thus, if a coherent body of material that is physically dispersed is brought, so to speak, under a single electronic roof, scholars needing to consult such data may be able to avoid substantial travel costs. To the extent that digitized journals permit libraries to remove paper copies from the stacks, building space is freed up and the need for new library construction is at least postponed (Bowen, 2000). Such savings do not accrue to the library, since scholars’ travel costs and construction costs have typically been treated as external to a library’s budget, but they do accrue to the scholar’s home institution as a whole, and this fact may therefore require us to think in novel ways about allocating the costs or savings of electronic “publications.” Other cost factors that need to be considered in connection with electronic journals (and other scholarly materials) are the potential savings in physical space, as well as the additional costs of hardware and software in connection with both delivery of the product to end users and archiving. But whatever cost savings may occur in the digital environment, the visionaries appear not to heed Bowen’s admonition that we “need to be realistic in thinking about costs and avoid the ever-present danger of believing that great things can be accomplished ‘on the cheap’.”18 At the opposite pole from the visionaries stand the troglodytes, the most notable recent example being Nicholson Baker (2001), who takes libraries and librarians to task for any number of wrongheaded views and activities. His point of departure is the lamentable destruction of old newspapers in many libraries for space-saving reasons; he goes on to document erroneous beliefs in the impermanence of acid paper and in the virtues of microfilm, the checkered history of deacidification, and the inadequacy of digitization.
But quite apart from seeming to believe in massive conspiracies to destroy paper-based library materials, he is an absolutist and therefore must reject cost-benefit analyses. In fact, there is a trade-off between library space and digitizing, and while there may be a reasonable argument to the effect that not every copy of the old journals archived by JSTOR should be destroyed, there is no reason why every library should keep all its copies of these old journals. He cites (2001, p. 71), as an example of the “intolerably corrupt” optical character recognition (OCR) employed by JSTOR, that a search on “modem life” returns an 1895 citation because the “rn” in “modern” was misread as “m,” while omitting the fact that on average the search tool in JSTOR is exceptionally good and useful and saves scholars enormous amounts of time (not to mention the fact that the human eye is capable of even grosser errors). While his historical reflections are always interesting and often amusing, he is an enemy of digitizing scholarly materials, at least if doing so threatens the paper product.19
Productivity Enhancements in Using Electronic Materials
It has been well known for some time that productivity increases are difficult or even impossible to achieve in certain heavily personal service-dependent activities. This is often expressed by noting that it will always take four people to perform a string quartet and that you cannot improve productivity by playing it, say, at twice the normal speed (Baumol & Bowen, 1996). It is worth asking whether the activities that scholars and students engage in, namely teaching, learning, and research, might not be of a similar variety, i.e., not easily subject to productivity enhancements. In dealing with this question, it is important to focus on what we might mean by “productivity enhancements,” and this is by no means obvious. The straightforward and easy answer is almost certainly wrong.
When the catalog of a library is automated, it may be tempting to consider the number of new computer terminals in the library as a measure of enhanced productivity. When a library subscribes to various electronic databases, one may wish to use the number of databases that can be reached from it as a suitable productivity measure. To illustrate this further in a completely different context, when modern western business management programs are created in formerly socialist countries, one may use the number of graduates from such institutions as the measure of success. All of these data are relevant for something, but they do not tell us whether the teaching, learning, or research that takes place in a university has become better, more effective, or more extensive as a result of the introduction of information technology, and the number of business school graduates with MBA degrees does not tell us whether firms in the country are better managed and therefore make a greater contribution to GDP. It is quite plausible that teaching and learning can both be improved by suitable applications of information technology. In fact, improvements in teaching and learning are routinely intended and frequently accomplished by the preparation of new textbooks containing ingenious new ways of guiding the student through the subject, and new “workbooks” with better and more intuitive examples. It is entirely plausible that information technology can effect improvements by making access to information faster, broader, and qualitatively better. But it does not automatically follow from this that the quantity of learning (however measured) will increase as a result. 
If information is obtainable faster, it is entirely possible that students will spend the time saved on activities that enhance utility directly rather than on additional learning; productivity will have increased (because a given amount of learning can now be acquired with less labor time), but as academics, we might hope that the gain will also be translated into more learning. But the problem of measurement is even more difficult in the case of research. With faster and broader access to information, it may well be the case that a given piece of research can be accomplished in less time. The total quantity of research may increase (although any self-respecting academic promotion committee will shudder at the thought of measuring the quality of a candidate only by the quantity of his or her research!), or it may not, if professors decide to spend the time gained on utility-enhancing activities. But will the quality of research improve? This is a very difficult question and it is not obvious how to go about answering it. One might be tempted to look at citation indices on the supposition that better research is cited more frequently. While there is merit in this hypothesis for evaluating the work of an individual scholar, there is a fundamental identification problem if the quality of research improves for all scholars: how would one know that citation frequencies have not increased just because information technology makes it easier to provide citations? There do not appear to be many studies of educational productivity enhancements resulting from the application of information technology. One careful study deals with the costs of and the learning achieved in an art history course given at Yale biannually (Bennett, 1999). The course, normally attended by about 500 students, traditionally made available 200 photographs of art objects before the midterm examination and 400 photographs before the final examination in a 480 sq. ft.
gallery space for a period of several weeks; during this time students had to practice visual memorization and had to prepare themselves to identify art objects and comment on them in the examinations. Under the new system, 1,250 photographs were scanned and made available to students over the Yale intranet. Students were no longer crammed into a small space and could examine the art objects at their leisure from their rooms at any time. All costs were carefully tracked, including selection of images, further selection of slides by teaching fellows for their class sections, cost of digitization, network connection, etc. Of course, a basic factor that was not held constant was that the number of images under the digital scenario was more than three times greater than under the old system. The costs of the digital scenario were 36 percent greater when amortization was assumed to be carried out over a six-year period; breakeven between the two methods occurred over a sixteen-year amortization period. In the short run therefore, the digital scenario was substantially more expensive. But in a hypothetical scenario in which the digital approach also used only 400 images, it was 6 percent less expensive than the older approach. But in another hypothetical scenario in which the teaching fellows selected their own images for class sections (instead of accepting the head teaching fellow’s selections), the digital scenario obtained a 44 percent cost advantage. All this indicates that even in something relatively straightforward, such as measuring dollar costs, we need to be extremely precise in defining what scenarios are being compared. The picture with respect to the amount of learning that took place was ambiguous. 
One teaching fellow reported that students liked the digital approach but could not be said to have learned more or to have submitted better written work; another teaching fellow thought the same, with the qualification that the students seemed to learn more easily. But the head teaching fellow reported that student test performance on visual recognition was much improved over past years, and another teaching fellow claimed that the students learned more and wrote better papers. While the evidence from the Bennett study is somewhat ambiguous, this is precisely the kind of information that one would like to obtain from a whole range of teaching and research innovations. If a digital archive of first folios and quartos were accessible, would papers on textual variants of Shakespeare plays be better and more comprehensive and definitive, or would it merely take less time to write them (because the scholar would no longer have to make repeated trips to Oxford, Wrocław, and other places)? Would research on library acquisition policies be more authoritative by virtue of the fact that library data are broadly available on the Web?20 These are the types of questions to which answers have generally not yet been forthcoming, and yet without them the question of the impact of information technology on productivity in academic endeavors cannot be decided.
Pricing and Related Issues
While some of the early enthusiasts of electronic journals believed that it would cost only perhaps 25 percent of the corresponding paper journal costs to produce an electronic journal and that many could actually be distributed free of charge (as, for example, Ginsparg’s preprint server in high-energy physics), the actual experience is different, and MacKie-Mason, Riveros, Bonn, and Lougee observe that “Pricing electronic access to scholarly information is far from being a well-understood practice” (1999).
They report, on the basis of a sample of journals and publishers, that in cases where a paper version and an electronic version coexist and the publisher charges a single combined price, the surcharge over the price of the paper version ranges from 8 percent to 65 percent. They also report that half the publishers in the sample offer the electronic version by itself at a price ranging from 65 percent to 150 percent of the paper version, with the most frequent price being 90–100 percent of the paper version. There are two primary reasons that the pricing of information goods is complicated: (1) publishers can practice price discrimination, i.e., sell the same good to different consumers at different prices, and (2) publishers can bundle different information goods into a single package. The former is commonly practiced by publishers who charge different subscription prices to libraries and to individuals; such a system requires for its effectiveness that there be no easy way for individuals to undercut publishers by reselling to libraries, and in fact there is none. Varian (1995) uses the following simple example to illustrate this phenomenon. Imagine that it costs $7 to produce the first copy of a journal and nothing to produce the second (zero marginal cost); further imagine that consumer A values the journal at $5 and consumer B at $3. There is no single price at which production costs can be covered: if the journal is priced at $5, only A will buy and revenues are $5; if it is priced at $3, both will buy, but revenues are only $6. To be able to produce the journal profitably, the publisher must be able to sell at different prices to A and B. Varian’s example of bundling assumes that there are two journals, valued at, say, $5 and $3 respectively by one consumer, with another consumer valuing them at $3 and $5 respectively.
If the publisher of the two journals sells them for $3 each, both consumers will buy both journals, and total revenue will be $12 (one copy each of the two journals sold to the two consumers). But if the publisher bundles them, i.e., ties them together as a package, and sells the package for $8, again both consumers will buy both journals by purchasing the bundled package, but revenue now is $16. Note that units of scholarly information are not peculiar in permitting bundling: Adams and Yellen note in one of the earliest papers on bundling that “Commodity bundling . . . occurs when firms sell the same physical commodity in different container sizes” (1976). In fact, the standard journal issue, containing perhaps a dozen articles, is itself a bundled commodity in which articles on different subjects appealing to different readers are presented as a bundle, which means in effect that the subscriber does not have the choice of purchasing only those articles in which he/she has a particular interest. A bewildering array of online and paper journals exists, which, depending on the provider or vendor, may be obtainable in paper version alone, online version alone, or as a combination of the two, with the subscriber choosing what journal to subscribe to in some cases and being given no such choice in others, each with a different individual and institutional subscription rate. Packages of various kinds are offered to subscribers or members, as the case may be, by universities (HighWire Press21), learned societies (Max Planck Gesellschaft22), for-profit publishers (Wiley Interscience,23 Kluwer Online,24 Elsevier25), and not-for-profit organizations (JSTOR,26 Open Society Institute’s eIFL initiative27).
The scale of these operations can be seen from the number of articles or journals to which these initiatives provide access (as of July 31, 2001): HighWire Press, 1,048,802 articles; Wiley Interscience, 300 journals; Kluwer Online, 600 journals; eIFL, 3,200 journals and 1,300 full-text reference books accessed by some 2,500 institutions in 39 countries; JSTOR, 1,301,259 articles in 266 journals. The advantages of bundling information goods, for consumers as well as their producers, derive from the fact that different users have different reservation prices for individual journals, depending on the journals’ and scholars’ academic specializations. One scholar might value very highly articles on monopoly pricing and other microeconomic topics but place a low value on one on macroeconomics, while another might have a diametrically opposed valuation. Assume that there is a distribution of valuations over journals among consumers and that these distributions are independent of one another. If journals are then bundled, relatively few consumers will have a very low or a very high valuation for the bundle as a whole, and many more will have an “in-between” valuation.28 This changes the demand curve for journals into a demand curve for bundles with a very flat (elastic) portion over a large intermediate range of quantities; thus, setting the price at the level corresponding to this flat range of the demand curve tends to extract for the producer most of the consumer surplus (i.e., the excess of the valuations that some consumers place on the good over and above its price) and leaves almost no deadweight loss (i.e., the sum of consumer valuations in excess of marginal cost for those who could not afford to buy at the prevailing price) (Bakos & Brynjolfsson, 2000). Bundling therefore appears to be an extremely attractive strategy for producers.
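Varian’s two numerical examples and the Bakos-Brynjolfsson concentration argument can be verified in a few lines of Python. This is purely an illustrative sketch: the prices and valuations in the first part are the figures quoted above, while the uniform $0–$10 valuations in the simulation are an assumption made here for illustration, not figures from any of the studies cited.

```python
import random
import statistics

# Varian's price-discrimination example: first-copy cost $7, zero
# marginal cost; consumer A values the journal at $5, consumer B at $3.
cost, val_a, val_b = 7, 5, 3
best_single_price_revenue = max(val_a,      # price $5: only A buys -> $5
                                2 * val_b)  # price $3: both buy    -> $6
discriminating_revenue = val_a + val_b      # charge each his valuation -> $8

# Varian's bundling example: two journals valued (5, 3) by one consumer
# and (3, 5) by the other.
unbundled_revenue = 3 * 2 * 2  # $3 per journal, both consumers buy both
bundled_revenue = 8 * 2        # bundle priced at $8, both consumers buy it

# Bakos-Brynjolfsson intuition: with many journals and independent
# valuations, the per-journal value of a bundle concentrates around the
# mean, flattening the demand curve for the bundle.
random.seed(0)

def per_journal_bundle_value(n):
    """Average valuation per journal for a bundle of n journals."""
    return sum(random.uniform(0, 10) for _ in range(n)) / n

single = [per_journal_bundle_value(1) for _ in range(2000)]
bundle = [per_journal_bundle_value(100) for _ in range(2000)]
# Dispersion of per-journal valuations is far narrower for the bundle.
print(round(statistics.stdev(single), 2), round(statistics.stdev(bundle), 2))
```

No single price covers the $7 first-copy cost, but discrimination yields $8; bundling raises revenue from $12 to $16; and the dispersion of per-journal valuations shrinks roughly tenfold for the 100-journal bundle (the standard deviation of a mean of n independent draws falls as 1/sqrt(n)), which is why a single bundle price can capture most of the consumer surplus.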
Chuang and Sirbu (2000) have a more elaborate model in which consumer preferences for individual items are characterized by three parameters: the valuation placed on the most preferred item, the rate at which valuation falls off for subsequent items, and an economies-of-scale parameter that describes how the marginal cost of producing the bundled good relates to the marginal cost of individual items in the bundle. Their model reveals that the choice between pure bundling, pure unbundling, and mixed bundling (a situation in which the publisher offers a bundled good and also permits individual items to be purchased) is more complicated than the simple examples above suggest. If there are n goods, pure bundling does not necessarily dominate pure unbundling; if marginal cost is nonzero, pure bundling is a dominant strategy only if economies of scale in producing the bundle are significant enough and marginal cost is not too high relative to the readers’ valuation of items. But mixed bundling is always a dominant strategy and is a socially desirable alternative, particularly in cases in which, with pure bundling, consumers might be forced to consume items that they value less than marginal cost. But not everybody is enthusiastic about bundled scholarly material. The biggest objection raised by Kenneth Frazier (2001) is that once the “big deal” (i.e., the bundle) is accepted, a library can no longer cancel its subscription to a particular electronic journal (although it can cancel the paper version), whereas if it had not accepted the bundle, it could have selectively subscribed to electronically available journals. A second objection to purchasing bundled journals is that it disintermediates serials vendors and enjoins libraries from sharing electronic content with outsiders, although this hardly seems significant.
Frazier ends with a clarion call to librarians to invest in “bold new experiments” of a nonprofit variety, such as MIT’s CogNet, Columbia University’s Earthscape (and by implication CIAO), and others of this kind, although a number of these initiatives are broader in their approach (since they often encompass working papers and other materials in addition to journals) yet narrower in field coverage, and may not provide full text for the journals. There is no doubt that there has been considerable interest in starting new, not-for-profit ventures that might compete with the commercial electronic journals, but some of them have been slow to get established, and their success will probably be directly proportional to the extent to which they can carve out for themselves a well-defined, unique, and not-too-broad niche. In any event, creating competition for the existing, commercially produced journals is desirable, and some significant attempts in this direction have been undertaken, to wit, by the Scholarly Publishing and Academic Resources Coalition (SPARC), which supports competition among high-priced prestige journals in the belief that “1) if authors have superior alternatives to existing high-priced journals, they will ultimately move to the outlet that better satisfied their need for both recognition and broad dissemination, and 2) if publishers have market support for bold (but inherently risky) new ventures, they are more likely to make the investment” (Johnson, 1999). Full SPARC membership by libraries costs $5,000 in dues, plus $7,500 in purchase commitments for journals, which number fourteen at the present time. SPARC is also affiliated with other, broader scientific information sources such as MIT’s CogNet, Columbia’s Earthscape, BioOne, eScholarship, and Cornell’s Project Euclid.
It is further encouraging that LIBER, the principal association of research libraries in Europe, voted to become SPARC’s arm in Europe and will be joined by several organizations such as JISC (Joint Information Systems Committee) and CURL (Consortium of University Research Libraries) in the UK. Public discontent with the existing commercial systems is palpable, and over 22,000 life scientists have signed an open letter stating that “they will publish in, review, and subscribe to only those journals that agree to make the contents of their titles available for free on a publicly accessible server . . . within six months of publication” (Case, 2001). Another case in point may be the Electronic Society for Social Scientists (ELSSS), which is in the process of trying to establish itself, although it is difficult to predict whether it will be successful.29 ELSSS would pay authors $500 for an article and referees $200–$250, while a typical subscription to its Review of Banking and Finance would sell for an annual $500, in contrast with Elsevier’s Journal of Banking and Finance, which costs an annual $1,066. While it makes good sense for efforts such as SPARC or ELSSS to seek well-defined niches, and SPARC in particular is not neglecting the cultivation of demand for its journals, it is difficult to see how, in ventures of this type, libraries would be induced to substitute the ELSSS journal for the established Elsevier journal. Two significant efforts were carried out in the 1990s to investigate the technology of delivering journal material to the scholar’s workstation and the usage of electronically available journals. Both rested on a collaboration between Elsevier and a number of universities, among which the University of Michigan was primus inter pares. The first of these, The University Licensing Program (TULIP), started in early 1991 and ended in late 1995 (TULIP, 1996).
In addition to Michigan, the other participating institutions were Carnegie Mellon University, Cornell, Georgia Institute of Technology, MIT, the University of California, the University of Tennessee, the University of Washington, and the Virginia Polytechnic Institute and State University. The objective of TULIP was to provide the participating institutions with scanned page images and OCR-based ASCII full text of forty-three journals in materials science, although another source claims initially forty and ultimately eighty-three journals (Hunter, 2000). The project actually started in 1993, and ultimately some 500,000 articles were produced by the system. It was clearly an early project in this class; the files were provided to the participating universities, which themselves designed the software with which users could access them. It is clear that a great deal of technical information was gained through TULIP, and its limitations partially reflected the limited nature of the information infrastructure in place; so much so that eventually Internet distribution of files was replaced by CD-ROM-based distribution. Browsers were not really available at the outset, and both hardware and software represented significant obstacles to the effective use of the database. Storage in those days was much more expensive than today, and the database was inherently small, i.e., did not have critical mass; as a result, penetration at the various universities was rather low. Defining penetration as the percentage of eligible users who were repeat users, Carnegie Mellon achieved a penetration of 8–12 percent in 1995; Georgia Tech, 50 percent; MIT, 8–9 percent; and the University of California, 1–2 percent. On the whole, the penetration figures were not very impressive. Furthermore, the economic and usage aspects had not been designed with adequate attention to detail, and thus relatively little was learned about the economics of electronic journals (Hunter, 2000).
But the most important aspects of TULIP were probably that a significant amount of learning took place by Elsevier as well as by the participating institutions and that Elsevier decided, on the basis of TULIP, to scan all its journals and start a commercial subscription service, Elsevier Electronic Subscriptions (EES), at a price of 35 percent of the paper subscription (135 percent for paper and electronic together). EES then spawned ScienceDirect, which had a three-part fee consisting of a “platform fee” (for developing and maintaining the service), a content fee (basically 15 percent of the paper rate, or 90 percent of the paper rate for an electronic-only subscription), and a transactional fee: if the institution maintained its level of spending, the content fee declined to 7.5 percent, and the institution could get additional articles outside the subscribed titles for free up to a certain allowance, beyond which a transactional fee of $15/article would be charged (Hunter, 2000). The second Elsevier–University of Michigan project was a bold and complex experiment designed to reveal users’ attitudes toward the costs of electronic access. The project, Pricing Electronic Access to Knowledge (PEAK), provided four and a half years of content from about 1,200 Elsevier journals over an eighteen-month period at twelve institutions (MacKie-Mason, Riveros, Bonn, & Lougee, 1999; Gazzale & MacKie-Mason). By the end of the project, it contained 849,371 articles, of which 111,983 had been accessed at least once (Gazzale & MacKie-Mason). Three access methods were provided: (1) traditional subscriptions; (2) generalized subscriptions, a single one of which entitled an institution to unlimited access to 120 articles from the entire database for a fixed prepayment that was nonreturnable even if fewer than 120 articles were actually accessed; and (3) individual article accesses on a “pay by the drink” basis.
The prices were set so that journal issues would be available under Method 1 for $4 (if the institution already subscribed to the journal) or for $4 plus 10 percent of the paper subscription rate (if it did not previously subscribe); articles would cost $4.56 under Method 2 (if the entire allotment of 120 tokens was used up) and $7 under Method 3. In addition, institutions differed in the nonpecuniary costs that readers faced for Methods 2 and 3, consisting variously of login and authentication procedures (such as passwords) and of the requirement to enter credit card information. Institutions were divided into three groups: one group was offered access by all three methods, another group only by Methods 1 and 3, and the third group only by Methods 2 and 3. Some content was available at zero user cost, such as articles published at least two years before the experimental period, articles in journals to which the institution purchased an electronic traditional subscription, and articles previously purchased as part of a generalized subscription. While the conclusions from the study, based on careful logs of usage, are detailed, multifaceted, and complex, it is clear that paid usage declined quite dramatically with increases in the marginal cost of access, whether pecuniary or nonpecuniary. It was possible to calculate the optimal expenditures by an institution on the assumption that the actual usage of articles was completely foreseen; the comparison of optimal with actual expenditures suggests that forecasting the type of usage was rather imperfect in the first of the two experimental years but improved significantly in the second. The principal source of error was overestimation of usage, particularly in the category of traditional subscriptions, although six out of nine institutions made the correct adjustment for this type of subscription in the second year.
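The comparison among the three methods comes down to effective price per article. The following sketch simply encodes the prices quoted above; the function names and parameters are illustrative only, not part of the PEAK system itself.

```python
TOKEN_PRICE = 4.56   # per-article price under Method 2 if all tokens are used
TOKENS = 120         # articles covered by one generalized subscription

def generalized_cost_per_article(articles_used):
    """Effective per-article cost of a generalized subscription (Method 2).
    The prepayment (120 x $4.56) is nonreturnable, so unused tokens raise
    the effective price of the articles actually accessed."""
    prepayment = TOKEN_PRICE * TOKENS
    return prepayment / min(articles_used, TOKENS)

def per_article_cost(method, articles_used):
    """Illustrative per-article cost under the PEAK access methods 2 and 3."""
    if method == 2:
        return generalized_cost_per_article(articles_used)
    if method == 3:
        return 7.0   # "pay by the drink"
    raise ValueError("Method 1 is priced per journal issue, not per article")
```

An institution that used only 80 of its 120 tokens would pay an effective 4.56 × 120 / 80 = $6.84 per article, nearly the $7 pay-by-the-drink rate, which is one way to see why forecasting usage mattered so much in the experiment.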
Overall, the study was extremely revealing, particularly in showing that nonpecuniary costs have essentially the same significance as pecuniary ones and that users derive a great advantage from being able to access the entire database. In particular, the metered, pay-by-the-drink approach places a definite damper on usage and confirms the soundness of the original decision by the designers of JSTOR to make the whole database available for a subscription fee.

Concluding Comments
Since the early 1990s, transmission and storage capacities have become massively greater, and the technology of scanning paper and microfilm and making the scanned images available on the Internet has grown impressively. It is therefore not surprising that the corpus of electronically available materials has grown substantially, and it is safe to predict that we are nowhere near the end of this process. The availability of electronic materials has made access much easier, and the low cost of distributing these materials has been particularly beneficial for historically disadvantaged countries and institutions.30 There is no doubt that electronic distribution of scholarly materials is less expensive than the distribution of paper, but the prediction that the entire editorial process, particularly for scholarly journals, will be reengineered, so that journals become available on the World Wide Web for a fraction of the cost of paper journals or even for free, is nowhere near realization.
University presses and other academic groups are becoming significant players in electronic niche markets in which they are able to achieve self-sufficiency and to provide access to important materials at lower cost and with greater convenience than is possible with traditional methods; but these initiatives typically do not have the financial resources to mount a frontal attack on the commercial publishers' high-prestige, top-of-the-line journals.31 In addition, the commercial publishers have responded to the low entry barriers in the field of electronic publication by making their own products available electronically, thus providing the convenience that paper journals lack. Under these circumstances, it is extremely unlikely that competition from upstart electronic journals will dislodge existing prestige journals from their dominant position in the near term. The situation is somewhat analogous to the provision of public goods: if the editorial boards of the leading journals all quit and all scholars refused to submit articles to those journals (mutatis mutandis, if all potential taxpayers voluntarily paid for the provision of national defense), new electronic journals could supplant the existing ones; but no single individual has an incentive to defect. Since commercial publishers now tend to provide their journals in both paper and electronic form, the paper versions may well become less important over time, but it is not evident that commercial publishers have an incentive, at least in the short run, actually to terminate production of paper journals. Hence, the predicted demise of the paper journal and, even more so, of commercial publishers, is vastly exaggerated. And while the quality of access to scholarly information will continue to improve substantially, it is unlikely that the increasing dominance of electronic publications will ease the economic plight of libraries in the short run.

Acknowledgments
I am indebted to William G.
Bowen, Richard Ekman, Paul Donovan, and Roger Schoenfeld for useful comments. The responsibility for errors is mine alone. In particular, the views expressed in this paper are mine alone and not those of The Andrew W. Mellon Foundation.

Notes
1. Richard Ekman, then secretary of the Foundation and a senior program officer, and Richard E. Quandt, senior advisor to the Foundation and at the time Professor of Economics at Princeton University, were asked to provide direction for the program.
2. Somewhat higher prices are reported for 1994 by Carpenter and Alexander.
3. For example, Academic Press Harcourt Ltd. lists Biochemical and Biophysical Research Communications at an annual rate of $3,999 (2001 Global Print and Electronic (IPL Only) Subscription Rates, Academic Press Harcourt Ltd.).
4. It would have been more appropriate to use the ratio of this difference to the institutional price, which would have proxied the Lerner measure of monopoly power, (P - MC)/P.
5. E.g., in economics, journals such as the American Economic Review, the Quarterly Journal of Economics, or the Journal of Political Economy.
6. See http://scilib.ucsd.edu/sio/guide/prices/prices3.html.
7. See http://www.reed-elsevier.com/.
8. The same rule-of-thumb is employed in Quandt, 1996b. Let x_i = 1 or 0 indicate whether the ith journal is or is not selected, u_i its usefulness or quality, c_i its cost, and B the overall budget. Then a library's general optimization problem may be expressed as the knapsack problem in integer programming: maximize u_1 x_1 + u_2 x_2 + . . . + u_n x_n subject to c_1 x_1 + c_2 x_2 + . . . + c_n x_n <= B, with each x_i equal to 0 or 1. The rule-of-thumb employed provides a heuristic, but not necessarily exact, solution to this problem. A similar device is used in Weitzman, 1998, in which species (books) are selected to be included in Noah's Ark (libraries), given their usefulness, diversity, and probability of survival.
9. http://www.gmcc.ab.ca/~supy/lec02.htm.
10. http://www.digitalcentury.com/encyclo/update/comp_hd.html.
11. See the discussion of the Boskin Commission Report in David, 2000, p. 59.
12. But Ginsparg has had substantial support from the National Science Foundation.
13. Email on July 26, 1994.
14. "A major advantage of such a system is that the journal can be available for free anytime everyplace that data networks reach. However, the lack of copy editing that is likely to prevail in such a system may not be acceptable. I expect that what editing assistance might be required will not cost anywhere near what print journals cost, and so might be provided by the authors' institutions. If that happens, electronic journals can also be distributed freely." (Odlyzko, 1995).
15. Methods that have been suggested include always keeping on hand an inventory of older equipment, designing new equipment so that it can always emulate old equipment, reaching broad agreement on norms and standards, and, of course, always producing a paper copy as well. See Ekman, R., "Keynote Address," Second Annual International Virtual Library Conference, New York, June 3, 1999. But of course, the last of these approaches is bound to raise questions about the permanence of paper. See also Waters, D., "Some Considerations on the Archiving of Digital Information," http://www.ifla.org/documents/libraries/net/waters1.htm, January 1995, and Waters, D., & Garrett, J., "Preserving Digital Information: Final Report and Recommendations," http://www.rlg.org/ArchTF/, 1996.
16. See the next section for a more detailed discussion of bundling.
17. http://www.sciencedirect.com.
18. W. G. Bowen (2000). Emphasis in the original.
19. It is not clear what his attitude would be concerning digital scholarly materials that have never had a paper counterpart, i.e., that are originally created as electronic products.
20. http://fisher.lib.virginia.edu/cgi-local/newarlbin/listyear.pl.
21. http://highwire.stanford.edu.
22. http://www.biochem.mpg.de/zb/elpubl.html.
23. http://www3.interscience.wiley.com/about.html#basic.
24. http://www.wkap.nl/kaphtml.htm/KODETAILS.
25. http://www.sciencedirect.com.
26. http://www.jstor.org.
27. http://soros.epnet.com/pr092199.html.
28. A consequence of computing the distribution of the sum of random variables by convoluting the underlying densities.
29. http://www.elsss.org.uk/.
30. In the first half of the 1990s, various book and journal donation programs were successful in persuading publishers to make their journals available to East European countries either free or at a very low cost. But the number of copies in which these free journals were made available was strictly limited, and it was not conceivable that publishers should offer these in the hundreds or thousands. See Quandt (2002).
31. It is unlikely that JSTOR would have been realized without the substantial resources of The Andrew W. Mellon Foundation, which underwrote its development.

References
Adams, W. J., & Yellen, J. L. (1976). Commodity bundling and the burden of monopoly. Quarterly Journal of Economics, 90, 475–498.
Baker, N. (2001). Double fold: Libraries and the assault on paper. New York: Random House.
Bakos, Y., & Brynjolfsson, E. (2000). Aggregation and disaggregation of information goods: Implications for bundling, site licensing, and micropayment systems. In Kahin, B., & Varian, H. R. (Eds.), Internet publishing and beyond: The economics of digital information and intellectual property (pp. 114–137). Cambridge, MA: The MIT Press.
Baumol, W. J., & Bowen, W. G. (1966). Performing arts: The economic dilemma. New York: 20th Century Fund.
Bennett, S. (1999). Information-based productivity. In Ekman, R., & Quandt, R. E. (Eds.), Technology and scholarly communication (pp. 73–94). Berkeley, CA: University of California Press.
Blinder, A. S., & Quandt, R. E. (1997, December). The computer and the economy. Atlantic Monthly, 26–32.
Borgman, C. L. (2000). From Gutenberg to the global information infrastructure. Cambridge, MA: MIT Press.
Bowen, W. G. (1999). Preface. In Ekman, R., & Quandt, R. E. (Eds.), Technology and scholarly communication (p. ix). Berkeley, CA: University of California Press.
Bowen, W. G. (2000). At a slight angle to the universe: The university in a digitized, commercialized age. The Romanes Lecture for 2000, University of Oxford, October 17, 2000.
Brynjolfsson, E., & Hitt, L. M. (2000). Beyond computation: Information technology, organizational transformation and business performance. Journal of Economic Perspectives, 14(4), 25–48.
Carpenter, K. H., & Alexander, A. W. (1994). U.S. periodical price index for 1994. American Libraries, 25(5), 450–452.
Case, M. M. (2001). Public access to scientific information: Are 22,700 scientists wrong? A discussion of librarians' role in the Public Library of Science. Retrieved from http://www.arl.org/sparc/core/index.asp?page=g19#6.
Charnes, A., Cooper, W. W., Lewin, A. Y., & Seiford, L. M. (1994). Data envelopment analysis: Theory, methodology, and application. Boston: Kluwer.
Chressanthis, G. A., & Chressanthis, J. D. (1994a). The determinants of library subscription prices of the top-ranked economics journals: An econometric analysis. Journal of Economic Education, (Fall), 367–382.
Chressanthis, G. A., & Chressanthis, J. D. (1994b). A general econometric model of the determinants of library subscription prices of scholarly journals: The role of exchange rate risk and other factors. Library Quarterly, 63(3), 270–293.
Chuang, J. C.-I., & Sirbu, M. A. (2000). Network delivery of information goods: Optimal pricing of articles and subscriptions. In Kahin, B., & Varian, H. R. (Eds.), Internet publishing and beyond: The economics of digital information and intellectual property (pp. 138–166). Cambridge, MA: The MIT Press.
Computer history and development. (n.d.). Jones Telecommunications and Multimedia Encyclopedia. Retrieved October 20, 2002, from http://www.digitalcentury.com/encyclo/update/comp_hd.html.
Cummings, A. M., Witte, M. L., Bowen, W. G., Lazarus, L. O., & Ekman, R. (1992). University libraries and scholarly communication. Washington, DC: The Association of Research Libraries.
David, P. A. (2000). Understanding digital technology's evolution and the path of measured productivity growth: Present and future in the mirror of the past. In Brynjolfsson, E., & Kahin, B. (Eds.), Understanding the digital economy: Data, tools, and research (pp. 49–95). Cambridge, MA: The MIT Press.
Ekman, R. (1999). Keynote address. Second Annual International Virtual Library Conference, June 3, 1999, New York.
Ekman, R., & Quandt, R. E. (1993). Potential uses of technology in scholarly publishing and research libraries. (Working paper, December 13). New York: The Andrew W. Mellon Foundation.
Ekman, R., & Quandt, R. E. (1995). Scholarly communication, academic libraries, and technology. Change, January/February, 34–44.
Ekman, R., & Quandt, R. E. (Eds.). (1999). Technology and scholarly communication. Berkeley, CA: University of California Press.
Frazier, K. (2001). The librarians' dilemma: Contemplating the costs of the "big deal." D-Lib Magazine, 7(3), March. Retrieved from http://www.dlib.org/dlib/march01/frazier/03frazier.html.
Fuchs, I. H. (2001). Prospects and possibilities in the digital age. Proceedings of the American Philosophical Society, 145(1), 45–53.
Gazzale, R. S., & MacKie-Mason, J. K. (n.d.). System design, user cost and electronic usage of journals. Retrieved from http://faculty.si.umich.edu/~jmm/peak-book/peak-chapters.htm.
Getz, M. (1999). Electronic publishing in academia: An economic perspective. In Ekman, R., & Quandt, R. E. (Eds.), Technology and scholarly communication (pp. 102–132). Berkeley, CA: University of California Press.
Halliday, L., & Oppenheim, C. (2000). Economic models of digital-only journals. PEAK Conference, March 23–24, University of Michigan. Retrieved from http://www-personal.si.umich.edu/~jmm/peak-book/drafts/Halliday/halliday.pdf.
Harnad, S. (1994). Scholarly journals at the crossroads: A subversive proposal for electronic publishing. Retrieved from http://www.arl.org/scomm/subversive/sub01.html.
Hayes, R. M. (2000). Staffing patterns for academic libraries of Central and Eastern Europe, Russia, and CIS countries. In Lass, A., & Quandt, R. E. (Eds.), Library automation in transitional societies: Lessons from Eastern Europe (pp. 374–398). New York: Oxford University Press.
Hunter, K. (2000). PEAK and Elsevier Science. PEAK Conference, Ann Arbor, MI, March 23–24. Retrieved from http://faculty.si.umich.edu/~jmm/peak-book/peak-chapters.htm.
Jackson, M. E. (1997). Measuring the performance of interlibrary loan and document delivery services. Retrieved from http://www.arl.org/newsltr/195/illdds.html.
Johnson, R. K. (1999). Competition: A unifying ideology for change in scholarly communications. Retrieved from http://www.arl.org/newsltr/203/competition.html.
Ketcham, L., & Born, K. (1994). Projecting serials costs: Banking on the past to buy for the future. Library Journal, 119(7), 44–50.
Kyrillidou, M. (1995). Trends in research library acquisitions and ILL services. ARL, May, 3–4.
Lynden, F. C. (1993). Tracking serials costs with computer technology. Publishing Research Quarterly, 9(1), 63–81.
MacKie-Mason, J. K., Riveros, J. F., Bonn, M. S., & Lougee, W. P. (1999). A report on the PEAK experiment: Usage and economic behavior. D-Lib Magazine, 5(7–8), July/August. Retrieved from http://www.dlib.org/dlib/july99/mackie-mason/07mackie-mason.html.
MacKie-Mason, J. K., & Varian, H. R. (1993). Some economics of the Internet. Tenth Michigan Public Utility Conference, March 25–27, Western Michigan University, Kalamazoo.
McCabe, M. J. (1998). The impact of publisher mergers on journal prices: A preliminary report. Retrieved from http://www.arl.org/newsltr/200/mccabe.html.
McCabe, M. J. (2000). Academic journal pricing and market power: A portfolio approach. (Working paper, July). Georgia Institute of Technology, Atlanta, GA.
McCabe, M. J. (2002). Journal pricing and mergers: A portfolio approach. American Economic Review, 92(1), 259–269.
Meyer, R. W. (2000). The electronic library of the Associated Colleges of the South: A strategy to overcome the high cost of scholarly literature and to identify overpriced journals through collective access versus ownership. (Working paper). Trinity University, San Antonio, TX.
Miller, C., & Tegler, P. (1988). An analysis of interlibrary loan and commercial document supply performance. Library Quarterly, 58(4), 352–366.
Moulton, B. R. (2000). GDP and the digital economy: Keeping up with the changes. In Brynjolfsson, E., & Kahin, B. (Eds.), Understanding the digital economy: Data, tools, and research (pp. 34–48). Cambridge, MA: The MIT Press.
Noll, R. G. (1996). The economics of scholarly publications and the information superhighway. Brookings Discussion Papers in Domestic Economics, No. 3, May.
Noll, R. G., & Steinmueller, W. E. (1992). An economic analysis of scientific journal prices: Preliminary results. Serials Review, 19, 32–37.
Odlyzko, A. M. (1995). Tragic loss or good riddance? The impending demise of traditional scholarly journals. International Journal of Human-Computer Studies, 42, 71–122; also retrieved from http://www.research.att.com/~amo. This paper has both a full and a condensed version.
Odlyzko, A. M. (1999). The economics of electronic journals. In Ekman, R., & Quandt, R. E. (Eds.), Technology and scholarly communication (pp. 380–393). Berkeley, CA: University of California Press.
Peterson, H. C. (1989). Variations in journal prices: A statistical analysis. The Serials Librarian, 17(1/2), 1–9.
Peterson, H. C. (1990). University libraries and pricing practices by publishers of scholarly journals. Research in Higher Education, 31(4), 307–314.
Peterson, H. C. (1992). The economics of economics journals: A statistical analysis of pricing by publishers. College & Research Libraries, 53(2), 176–181.
Quandt, R. E. (1996a). Electronic publishing and virtual libraries: Issues and an agenda for the Andrew W. Mellon Foundation. Serials Review (Summer), 9–24.
Quandt, R. E. (1996b). A simulation model for journal subscription by libraries. Journal of the American Society for Information Science, 47(8), 610–617.
Quandt, R. E. (2002). The changing landscape in Eastern Europe: A personal perspective on philanthropy and technology transfer. New York: Oxford University Press.
Ritchie, D. M. (1984). The evolution of the UNIX time-sharing system. AT&T Bell Laboratories Technical Journal, 63(6), Part 2, (October), 1577–1593; also at http://cm.bell-labs.com/cm/cs/who/dmr/hist.html.
Rowland, F. (1997). Print journals: Fit for the future? Ariadne, 7, January. Retrieved from http://www.ariadne.ac.uk/issue7/fytton.
Sichel, D. E. (1997). The computer revolution: An economic perspective. Washington, DC: Brookings Institution Press.
TULIP final report. (1996). Retrieved from http://www.elsevier.nl/homepage/about/resproj/tulip.shtml.
Varian, H. R. (1995). Pricing information goods. Retrieved from http://www.sims.berkeley.edu/~hal/people/hal/papers.html.
Waters, D. (1995a). The social organization of archiving digital information. Retrieved from http://www.ifla.org/documents/libraries/net/waters2.htm.
Waters, D. (1995b). Some considerations on the archiving of digital information. Retrieved from http://www.ifla.org/documents/libraries/net/waters1.htm.
Waters, D., & Garrett, J. (1996). Preserving digital information: Final report and recommendations. Retrieved from http://www.rlg.org/ArchTF/.
Weitzman, M. L. (1998). The Noah's ark problem. Econometrica, 66, 1279–1298.

Journal: Library Trends
Volume: 51
Year of publication: 2003